
    Land use classification using deep multitask networks


    Building outline extraction from aerial imagery and digital surface model with a frame field learning framework

    Deep learning-based semantic segmentation models for building delineation face the challenge of producing precise and regular building outlines. Recently, a building delineation method based on frame field learning was proposed by Girard et al. (2020) to extract regular building footprints as vector polygons directly from aerial RGB images. A fully convolutional network (FCN) is trained to simultaneously learn the building mask, contours, and frame field, followed by a polygonization step. Using the contour direction information stored in the frame field, the polygonization algorithm produces regular outlines that accurately capture edges and corners. This paper investigates the contribution of elevation data from the normalized digital surface model (nDSM) to the extraction of accurate and regular building polygons. The 3D information provided by the nDSM overcomes the limitations of aerial images and helps distinguish buildings from the background more accurately. Experiments conducted in Enschede, the Netherlands, demonstrate that the nDSM improves the accuracy of building outlines, resulting in better-aligned building polygons, and prevents false positives. The investigated deep learning approach (fusing RGB + nDSM) results in a mean intersection over union (IoU) of 0.70 in the urban area, whereas the baseline method (using RGB only) results in an IoU of 0.58 in the same area. A qualitative analysis of the results shows that the investigated model predicts more precise and regular polygons for large and complex structures.
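
    To make the RGB + nDSM fusion concrete, the sketch below concatenates the nDSM as a fourth input band and feeds it through a shared backbone with three task heads (building mask, contours, frame field), in the spirit of the multi-task setup described above. It is a minimal illustration only: the layer sizes, the 4-channel frame-field parameterization, and the class name MultiTaskFrameFieldHead are assumptions of this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MultiTaskFrameFieldHead(nn.Module):
    """Illustrative multi-task model: shared backbone + three heads.

    Assumptions (not from the paper): a tiny two-layer backbone and a
    4-channel frame-field output.
    """

    def __init__(self, in_channels: int = 4, features: int = 32):
        super().__init__()
        # in_channels = 4 assumes RGB fused with nDSM as a fourth band.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, features, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.mask_head = nn.Conv2d(features, 1, 1)     # building interior probability
        self.contour_head = nn.Conv2d(features, 1, 1)  # building edge probability
        self.frame_field_head = nn.Conv2d(features, 4, 1)  # frame field coefficients

    def forward(self, x: torch.Tensor) -> dict:
        shared = self.backbone(x)
        return {
            "mask": torch.sigmoid(self.mask_head(shared)),
            "contour": torch.sigmoid(self.contour_head(shared)),
            "frame_field": self.frame_field_head(shared),
        }


if __name__ == "__main__":
    # Fuse RGB (3 bands) with nDSM (1 band) by channel concatenation.
    rgb = torch.rand(1, 3, 256, 256)
    ndsm = torch.rand(1, 1, 256, 256)
    model = MultiTaskFrameFieldHead(in_channels=4)
    outputs = model(torch.cat([rgb, ndsm], dim=1))
    print({name: tuple(t.shape) for name, t in outputs.items()})
```

    Channel concatenation is only one possible fusion strategy; the baseline in the abstract corresponds to dropping the nDSM band and setting in_channels to 3.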